
    Analysis of Dynamic Task Allocation in Multi-Robot Systems

    Dynamic task allocation is an essential requirement for multi-robot systems operating in unknown dynamic environments. It allows robots to change their behavior in response to environmental changes or actions of other robots in order to improve overall system performance. Emergent coordination algorithms for task allocation that use only local sensing and no direct communication between robots are attractive because they are robust and scalable. However, a lack of formal analysis tools makes emergent coordination algorithms difficult to design. In this paper we present a mathematical model of a general dynamic task allocation mechanism. Robots using this mechanism have to choose between two types of task, and the goal is to achieve a desired task division in the absence of explicit communication and global knowledge. Robots estimate the state of the environment from repeated local observations and decide which task to choose based on these observations. We model the robots and observations as stochastic processes and study the dynamics of the collective behavior. Specifically, we analyze the effect that the number of observations and the choice of the decision function have on the performance of the system. The mathematical models are validated in a multi-robot multi-foraging scenario. The model's predictions agree very closely with experimental results from sensor-based simulations.

    Comment: Preprint version of the paper published in the International Journal of Robotics Research, March 2006, Volume 25, pp. 225-24
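    The mechanism lends itself to a compact simulation. The sketch below illustrates the idea under stated assumptions: each robot keeps a short history of noisy local observations, estimates the fraction of (say) red work available, and applies a decision function to pick a task. The linear decision function and all constants are illustrative assumptions, not taken from the paper, which analyzes how the number of observations and the form of the decision function affect performance.

import random

# Illustrative sketch only: a population of robots choosing between two task
# types (RED, GREEN) from repeated local observations, with no communication.
N_ROBOTS = 20      # robots in the collective
HISTORY = 10       # observations each robot remembers
DEMAND_RED = 0.3   # true (unknown to the robots) fraction of red work

def decide_red(estimate):
    # Simplest decision function: choose RED with probability equal to the
    # locally estimated red fraction. Steeper functions change convergence.
    return estimate

def simulate(steps=500):
    histories = [[0] * HISTORY for _ in range(N_ROBOTS)]
    tasks = ["GREEN"] * N_ROBOTS
    for _ in range(steps):
        for i in range(N_ROBOTS):
            obs = 1 if random.random() < DEMAND_RED else 0  # noisy local sensing
            histories[i] = histories[i][1:] + [obs]
            estimate = sum(histories[i]) / HISTORY
            tasks[i] = "RED" if random.random() < decide_red(estimate) else "GREEN"
    return sum(t == "RED" for t in tasks) / N_ROBOTS

print("fraction of robots on RED:", simulate())  # approaches DEMAND_RED in expectation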

    Interaction and Intelligent Behavior

    We introduce basic behaviors as primitives for control and learning in situated, embodied agents interacting in complex domains. We propose methods for selecting, formally specifying, algorithmically implementing, empirically evaluating, and combining behaviors from a basic set. We also introduce a general methodology for automatically constructing higher-level behaviors by learning to select from this set. Based on a formulation of reinforcement learning using conditions, behaviors, and shaped reinforcement, our approach makes behavior selection learnable in noisy, uncertain environments with stochastic dynamics. All described ideas are validated with groups of up to 20 mobile robots performing safe-wandering, following, aggregation, dispersion, homing, flocking, foraging, and learning to forage.
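    As a hedged illustration of what "learning to select behaviors" can look like, here is a minimal sketch of one-step Q-learning over a discrete set of basic behaviors, keyed by a sensed condition and driven by shaped reinforcement. The behavior names come from the abstract; the learning rule, condition labels, and constants are assumptions, not the paper's exact formulation.

import random
from collections import defaultdict

BEHAVIORS = ["safe-wander", "disperse", "home", "forage"]
EPSILON, ALPHA, GAMMA = 0.1, 0.2, 0.9

Q = defaultdict(float)  # Q[(condition, behavior)] -> learned value

def select_behavior(condition):
    # Epsilon-greedy selection over the basic behavior set.
    if random.random() < EPSILON:
        return random.choice(BEHAVIORS)
    return max(BEHAVIORS, key=lambda b: Q[(condition, b)])

def update(condition, behavior, shaped_reward, next_condition):
    # One-step Q-learning update driven by shaped reinforcement
    # (progress signals, not just terminal success).
    best_next = max(Q[(next_condition, b)] for b in BEHAVIORS)
    Q[(condition, behavior)] += ALPHA * (
        shaped_reward + GAMMA * best_next - Q[(condition, behavior)]
    )

# Example: one learning step after executing "forage" under a hypothetical condition.
update("near-pucks", "forage", shaped_reward=0.5, next_condition="carrying")
print(select_behavior("near-pucks"))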

    Getting Humanoids to Move and Imitate

    This behavior-based approach to structuring and controlling complex robotic systems uses imitation for interaction and learning. The use of basis behaviors, or primitives, lets these humanoid robots dance the Macarena.

As robots increasingly become part of our everyday lives, they will serve as caretakers for the elderly and disabled, assistants in surgery and rehabilitation, and educational toys. But for this to happen, programming and control must become simpler and human-robot interaction more natural. Both challenges are particularly relevant to humanoid robots, which are highly difficult to control yet most natural for interaction with people and operation in human environments. As this article shows, we have used biologically inspired notions of behavior-based control to address these challenges at the University of Southern California's Interaction Lab, part of the USC Robotics Research Labs. By endowing robots with the ability to imitate, we can program and interact with them through human demonstration, a natural human-humanoid interface.

The human ability to imitate, to observe and repeat behaviors performed by a teacher, is a poorly understood but powerful form of skill learning. Two fundamental open problems in imitation involve interpreting and understanding the observed behavior, and integrating the visual perception and movement control systems to reconstruct what was observed. Our research has a similarly twofold goal: we are developing methods for segmenting and classifying visual input for recognizing human behavior, as well as methods for structuring the motor control system for general movement and imitation-learning capabilities. Our approach brings these two pursuits together much as the evolutionary process brought them together in biological systems [1,2]. We structure the motor system into a collection of movement primitives, which then serve both to generate the humanoid's movement repertoire and to provide prediction and classification capabilities for visual perception and interpretation of movement. This way, what the humanoid can do helps it understand what it sees, and vice versa. The more it sees, the more it learns to do, and thus the better it gets at understanding what it sees for further learning; this is the imitation process.

Behavior-based robotics

Our work over the last 15 years has focused on developing distributed, behavior-based methods for controlling groups of mobile robots and, most recently, humanoids. Behavior-based control involves the design of control systems consisting of a collection of behaviors. The inspiration for behavior-based control comes from biology, where natural systems are believed to be similarly organized, from spinal reflex movements up to more complex behaviors such as flocking and foraging.

Basis behaviors and primitives. Several methods for principled behavior design and coordination are possible. Collections of behaviors are a natural representation for controlling collections of robots. But how can we use the same idea in the humanoid control domain, where the body's individual degrees of freedom are more coupled and constrained? For this, we have combined the notion of primitives with another line of evidence from neuroscience, mirror neurons, to structure humanoid motor control into a general and robust system capable of a variety of skills and learning by imitation [6].

Humanoid control and imitation. Robot control is a complex problem, involving sensory and effector limitations and various forms of uncertainty. The more complex the system to be controlled, the more we must modularize the approach to make control viable and efficient.
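To make the behavior-based idea concrete, here is a minimal sketch of such a controller: each behavior independently maps sensor readings to a motor vote, and an arbiter combines the votes. The two behaviors and the summation arbiter are illustrative assumptions, not the Interaction Lab's actual system.

from typing import Callable, Dict, List, Tuple

# Each behavior maps sensor readings to a (linear, angular) velocity vote.
Behavior = Callable[[Dict[str, float]], Tuple[float, float]]

def avoid(sensors: Dict[str, float]) -> Tuple[float, float]:
    # Slow down and turn away when an obstacle is close (assumed threshold).
    return (0.1, 1.0) if sensors["obstacle_dist"] < 0.5 else (0.0, 0.0)

def home(sensors: Dict[str, float]) -> Tuple[float, float]:
    # Cruise while steering toward the home beacon's bearing.
    return (0.5, 0.3 * sensors["home_bearing"])

def control_step(behaviors: List[Behavior], sensors: Dict[str, float]) -> Tuple[float, float]:
    # One common arbiter: sum all votes (subsumption or priority-based
    # arbitration are equally valid design choices).
    votes = [b(sensors) for b in behaviors]
    return sum(v for v, _ in votes), sum(w for _, w in votes)

print(control_step([avoid, home], {"obstacle_dist": 0.3, "home_bearing": -0.4}))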
Humanoid agents and robots are highly complex; a human arm has seven degrees of freedom (DOF), the hand has 23, and the control of an actuated human spine is beyond current consideration. Yet humans display complex dynamic behaviors in real time and learn various motor skills throughout life, often through imitation. Methods for automating robot programming are in high demand. Reinforcement learning, which lets a robot improve its behavior based on trial-and-error feedback, is very popular. However, reinforcement learning is slow, as the robot must repeatedly try various behaviors in different situations. It can also jeopardize the robot. In contrast, learning by imitation is particularly appealing because it lets the designer specify entire behaviors by demonstration, instead of using low-level programming or trial and error by the robot.

In biological systems, imitation appears to be a complex learning mechanism that involves an intricate interaction between visual perception and motor control, both of which are complex in themselves. Although various animals demonstrate simple mimicry, only a very few species, including humans, chimps, and dolphins, are capable of so-called true imitation, which involves the ability to learn arbitrary new skills by observation.

Neuroscience inspiration

Evidence from neuroscience studies in animals points to two neural structures we find of key relevance to imitation: spinal fields and mirror neurons. Spinal fields, found in frogs and rats so far, code for complete primitive movements (or behaviors), such as reaching and wiping. Investigators recently found neurons with so-called mirror properties in monkeys and humans. They appear to directly connect the visual and motor control systems by mapping observed behaviors, such as reaching and grasping, to the motor structures that produce them.

We combine these two lines of evidence, spinal basis fields and mirror neurons, into a more sophisticated notion of behaviors, or perceptual-motor primitives. These let a complex system, such as a humanoid, recognize, reproduce, and learn motor skills. The primitives serve as the basis set for generating movements, but also as a vocabulary for classifying observed movements into executable ones. Thus, primitives can classify, predict, and act. In our approach to imitation, the vision system continually matches any observed human movements onto its own set of motor primitives. The primitive, or combination of primitives, that best approximates the observed input also provides the best predictor of what the robot expects to observe next. This expectation facilitates visual segmentation and interpretation of the observed movement. Imitation, then, is a process of matching, classification, and prediction. Learning by imitation, in turn, creates new skills as novel sequences and superpositions of the matched and classified primitives. The hierarchical structure of our imitation approach lets the robot initially observe and imitate a skill, then perfect it through repetition, so that the skill becomes a routine and itself a primitive. The set of primitives can thus adapt over time, to allow for learning arbitrary new skills, that is, for "true" imitation.
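The triple role of a primitive (classify, predict, act) can be sketched in code. The prototype-trajectory representation below is an assumed simplification for illustration, not the authors' implementation.

import numpy as np

class Primitive:
    """A perceptual-motor primitive: one object both generates movement
    and recognizes/predicts observed movement (assumed representation)."""

    def __init__(self, name: str, prototype):
        self.name = name
        self.prototype = np.asarray(prototype)  # normalized trajectory, shape (T, 3)

    def score(self, observed) -> float:
        # Classify: lower distance = better match to the observed motion.
        return float(np.linalg.norm(np.asarray(observed) - self.prototype))

    def predict(self, t: int):
        # Predict: the frame this primitive expects to observe next.
        return self.prototype[min(t + 1, len(self.prototype) - 1)]

    def act(self, scale: float = 1.0):
        # Act: emit a parameterized motor trajectory; a reaching primitive
        # would instead be parameterized by the goal position.
        return self.prototype * scale

def classify(observed, primitives):
    # Matching step of imitation: pick the best-approximating primitive.
    return min(primitives, key=lambda p: p.score(observed))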
Choosing the primitives

Movement primitives or behaviors are the unifying mechanism between visual perception and motor control in our approach, and choosing the right ones is a research challenge, driven by several constraints. On the one hand, the motor control system imposes physical bottom-up limitations, based on its kinematic and dynamic properties. It also provides top-down constraints from the type of movements the system is expected to perform, because the primitives must be sufficient for the robot's entire movement repertoire. On the other hand, the visual system's structure and inputs influence the choice of primitives for mapping the various observed movements onto the robot's own executable repertoire.

To serve as a general and parsimonious basis set, the primitives encode groups or classes of stereotypical movements, invariant to exact position, rate of motion, size, and perspective. Thus, they represent the generic building blocks of motion that can be implemented as parametric motor controllers. Consider a primitive for reaching. Its most important parameter is the goal position of the end point, that is, the hand or held object. It might be further parameterized by a default posture for the entire arm. Such a primitive lets a robot reach toward various goals within a multitude of tasks, from grasping objects and tools, to dancing, to writing and drawing. We used just such a reaching primitive in our experiments to reconstruct the popular dance, the Macarena.

What constitutes a good set of primitives? We have experimented with two types: innate and learned. Innate primitives are user-selected and preprogrammed. We have demonstrated the effectiveness of a basis set consisting of three types:

• discrete straight-line movements of subsets of degrees of freedom, accounting for reaching-type motions;
• continuous oscillatory movements of subsets of DOFs, accounting for repetitive motions [9]; and
• postures, accounting for large subsets of the body's DOFs [2].

Our approach computes the learned primitives directly from human movement data. We gather different types of such data using the following methods: vision-based motion tracking of the human upper body (using our tracking system), magnetic markers on the arm (using the FastTrak system), and full-body joint-angle data (using the Sarcos Sensuit). We first reduce the dimensionality of the movement data by employing principal components analysis, wavelet compression, and correlation across multiple DOFs. Next, we use clustering techniques to extract patterns of similar movements in the data. These clusters or patterns form the basis for the primitives; the movements in the clusters are generalized and parameterized to result in primitives for producing a variety of similar movements.
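The learned-primitive pipeline (reduce dimensionality, cluster, generalize) might look roughly like the sketch below. PCA and k-means stand in for the fuller combination of principal components analysis, wavelet compression, and cross-DOF correlation described above; segment length, component count, and cluster count are assumed values.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def learn_primitives(segments, n_components=8, n_primitives=5):
    """segments: (n_segments, segment_len * n_dof) array, each row a
    flattened joint-angle trajectory segment of equal length."""
    segments = np.asarray(segments)
    # Reduce dimensionality before clustering.
    reduced = PCA(n_components=n_components).fit_transform(segments)
    # Group similar movement segments.
    km = KMeans(n_clusters=n_primitives, n_init=10).fit(reduced)
    # Generalize each cluster into one primitive; the mean trajectory is a
    # crude stand-in for the parameterization step described in the text.
    return [segments[km.labels_ == k].mean(axis=0) for k in range(n_primitives)]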
Visual classification into primitives. Visual perception is also an important constraint on the primitives and a key component of the imitation process. Because human (and humanoid) visual attention is resource-limited, it must select the visual features that are most relevant to the given imitation task. Determining what those features are for a given demonstration is a challenging problem. Our previous work showed that people watching videos of arm movements displayed no difference in attention whether they were just watching or intending to subsequently imitate. In both cases, they fixated on the end point, the hand or a held object. Consequently, we can effectively bias the visual perception mechanism toward recognizing movements that it can execute, especially those movements it performs most frequently. The motor control system's structure, and its underlying set of movement primitives, provides key constraints for visual movement recognition and classification.

Our primitive classifier uses the primitives' descriptions to segment a given motion based on the movement data. In the experiments described below, we used end-point data for both arms as input for the vector-quantization-based classifier [11]. Again, a key issue in classification is representing the primitives such that they account for significant invariances, such as position, rotation, and scaling. Our classification approach converts the original motion into a vector of relative end-point movements between successive frames, then smooths and normalizes it. At the classification level, we ignore all other information about the movement, such as global position and arm configuration, enabling a small set of high-level primitive representations instead of a potentially prohibitively large set of detailed ones. Other information necessary for correct imitation serves to parameterize the selected primitives at the level of movement reconstruction and execution.

To simplify matching, our approach describes the primitives themselves in the same normalized form. For each time step of the observed motion, we compare a fixed-horizon window to every primitive and select the one that best matches the input. Adjacent windows with identical classifications connect to form continuous segments. For any segments that fail to match the given primitives, our approach uses the reaching primitive to move the end point frame by frame. Because the horizon window is of fixed size, a distinct primitive match applies only at the given timescale. We are currently working on addressing classification at multiple timescales.
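A minimal sketch of this classification step, assuming end-point data arrive as 3D positions and each primitive is stored as a normalized prototype of frame-to-frame deltas over one window length:

import numpy as np

WINDOW = 15  # frames per fixed-horizon window (assumed size)

def to_relative(endpoints):
    # Positions (T, 3) -> unit-normalized frame-to-frame deltas, discarding
    # global position so classification is position-invariant.
    deltas = np.diff(np.asarray(endpoints, dtype=float), axis=0)
    norms = np.linalg.norm(deltas, axis=1, keepdims=True)
    return deltas / np.maximum(norms, 1e-8)

def classify_windows(endpoints, codewords):
    """codewords: dict name -> (WINDOW, 3) prototype in the same normalized
    form. Returns one label per time step; adjacent identical labels then
    merge into continuous segments."""
    rel = to_relative(endpoints)
    labels = []
    for start in range(len(rel) - WINDOW + 1):  # slide one frame at a time
        win = rel[start:start + WINDOW]
        labels.append(min(codewords,
                          key=lambda k: float(np.linalg.norm(win - codewords[k]))))
    return labels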
To validate our approach, we implemented various examples of imitation, including reaching, ball throwing, aerobics moves, and dance, all on humanoid testbeds, taking human demonstrations as input. We also used 3D magnetic marker data from the human arm, gathered from subjects imitating videos of arm movements while wearing FastTrak markers for position recording. (These data were gathered at the National Institutes of Health Resource for the Study of Neural Models of Behavior at the University of Rochester.) We used four markers: near the shoulder, the elbow, the wrist, and the start of the middle finger. The movement data resulting from this experiment serve as input into our imitation system, as well as for automatically learning the primitives. Finally, we used full-body joint-angle data gathered with the Sarcos Sensuit, a wearable exoskeleton that simultaneously records the joint positions of 35 DOFs: the shoulders, elbows, wrists, hips, knees, ankles, and waist. (These data were obtained through a collaboration with the ATR Dynamic Brain Project at the Human Information Processing Labs in Kyoto, Japan.) We are currently focusing on reproducing the upper-body movements from those data on our testbeds, described next.

Evaluation testbeds

To properly validate our approach to humanoid motor control and imitation, we use different experimental testbeds. Most of our work so far has been done on Adonis (developed at the Georgia Institute of Technology Animation Lab), a 3D rigid-body simulation of a human with full dynamics. As the project progresses, we plan to use physical humanoid robots as the ultimate testbeds for evaluating our imitation architecture. The NASA Robonaut is one candidate, through collaboration with the Johnson Space Center. The Sarcos full-body humanoid robot is another, through collaboration with the ATR Dynamic Brain Project. Both robots are highly complex, built to approximate human body structure as faithfully as practically possible, and feature binocular cameras for the embodied visual perception critical for imitation.

Our approach to humanoid motor control and imitation relies on the use of a set of movement primitives. We have experimented with different types of such primitives on different humanoid simulation testbeds. Specifically, we have implemented two versions of the spinal fields found in animals. One closely modeled the frog data and used a joint-space representation; it controlled individual joints of Adonis's arms. We tested both types of primitives on a sequential motor task, dancing the Macarena. Both proved effective, but each had limitations for particular types of movements. This led us to propose and explore a combination approach, where multiple types of primitives can be sequenced and combined. For example, we constructed a basis behavior set consisting of three types of primitives:

• discrete straight-line movements using impedance control;
• continuous oscillatory movements using coupled oscillators (or a collection of piece-wise linear segments using impedance control); and
• postures, using PD-servos to directly control the joints.

We also added a fourth type of primitive, for avoidance, implemented as a repulsive vector field. The continuously active fourth primitive combined with whatever other primitive was executing to prevent any collisions between body parts. In the Macarena, for example, this is necessary for arm movements around and behind the head.
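This superposition of the always-active avoidance primitive with the executing movement primitive can be sketched as a repulsive vector field added to the commanded end-point velocity; the field shape, gains, and radius here are assumptions, not the values used on Adonis.

import numpy as np

def repulsive_field(endpoint, obstacles, gain=0.05, radius=0.25):
    # Push the end point away from any nearby body part (e.g. the head).
    push = np.zeros(3)
    for obs in obstacles:
        d = np.asarray(endpoint) - np.asarray(obs)
        dist = float(np.linalg.norm(d))
        if 1e-6 < dist < radius:
            push += gain * (radius - dist) * d / dist  # stronger when closer
    return push

def step(endpoint, primitive_velocity, obstacles, dt=0.01):
    # One control step: whatever primitive is executing contributes its
    # velocity command; the avoidance field is superposed on top.
    return (np.asarray(endpoint) + dt * np.asarray(primitive_velocity)
            + repulsive_field(endpoint, obstacles))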
Our goal is not to achieve perfect, completely precise, high-fidelity imitation. While that might be possible through the use of exact quantitative measurements of the observed movement using signal-processing techniques, it is not what happens in imitation in nature, and it is neither necessary nor helpful for our main goals: natural interaction and programming of robots. For those purposes, we aim for an approximation of the observed behavior, one that allows any necessary freedom of interpretation by the humanoid robot but achieves the task and effectively communicates and interacts with the human. Our goal is also distinct from task-level imitation, which only achieves the demonstration's goal but does not imitate the behaviors involved. This problem has been studied in assembly robotics, where investigators used a robotic arm to record, segment, interpret, and then repeat a series of visual images of a human performing an object-stacking task.